Scaffold initial orchestration tests #5
Conversation
# Conflicts:
#   azure/durable_functions/openai_agents/decorators.py
#   azure/durable_functions/openai_agents/model_invocation_activity.py
opentelemetry-api==1.32.1
opentelemetry-sdk==1.32.1
openai==1.98.0
Just wanted to double-check: my understanding is that adding openai and openai-agents here will not mean that every app installing azure-functions-durable will automatically install these packages as well, correct?
That's my understanding as well: these are requirements just for the repo. As far as I know, the dependencies of the package are specified in setup.py. We do have to be careful, I believe, not to let imports of OpenAI dependencies leak into files that shouldn't necessarily require them; that is, to use "local" imports where necessary instead of imports at the top of the file.
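For illustration only, here is a minimal sketch of that "local" import pattern. The module layout and function name are hypothetical (not the actual contents of model_invocation_activity.py); the only point is that the openai-agents import happens inside the function rather than at module scope:

from typing import Any


def create_hello_world_agent(name: str = "assistant") -> Any:
    """Build an agent for the OpenAI-backed activity (hypothetical helper).

    The import lives inside the function so that apps which install
    azure-functions-durable but never use the OpenAI integration are not
    forced to have openai-agents installed just to import this module.
    """
    # Local ("lazy") import: only evaluated when the helper is actually called.
    from agents import Agent  # provided by the optional openai-agents package

    return Agent(name=name, instructions="You only respond in haikus.")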
    context_builder, openai_agent_hello_world, uses_pystein=True)

expected_state = base_expected_state()
add_activity_action(expected_state, "{\"input\":[{\"content\":\"Tell me about recursion in programming.\",\"role\":\"user\"}],\"model_settings\":{\"temperature\":null,\"top_p\":null,\"frequency_penalty\":null,\"presence_penalty\":null,\"tool_choice\":null,\"parallel_tool_calls\":null,\"truncation\":null,\"max_tokens\":null,\"reasoning\":null,\"metadata\":null,\"store\":null,\"include_usage\":null,\"response_include\":null,\"extra_query\":null,\"extra_body\":null,\"extra_headers\":null,\"extra_args\":null},\"tracing\":0,\"model_name\":null,\"system_instructions\":\"You only respond in haikus.\",\"tools\":[],\"output_schema\":null,\"handoffs\":[],\"previous_response_id\":null,\"prompt\":null}")
Not necessarily right now, but I'm wondering if we could implement a builder utility so we don't have to deal with this raw JSON...
Yes, absolutely, these tests are tedious to create right now. I just didn't want to create too many helpers up front until I see what's needed as a few more tests are added.
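As a rough illustration of the builder idea (all names here are hypothetical, and the fields simply mirror the serialized payload shown above), a helper along these lines could replace the escaped JSON literals in the tests:

import json
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class ModelInvocationPayloadBuilder:
    """Hypothetical test helper that builds the expected activity input JSON."""

    user_message: str
    system_instructions: Optional[str] = None
    tools: List[Dict[str, Any]] = field(default_factory=list)

    # Settings that default to null in the serialized payload above.
    _NULL_MODEL_SETTINGS = (
        "temperature", "top_p", "frequency_penalty", "presence_penalty",
        "tool_choice", "parallel_tool_calls", "truncation", "max_tokens",
        "reasoning", "metadata", "store", "include_usage", "response_include",
        "extra_query", "extra_body", "extra_headers", "extra_args",
    )

    def build(self) -> str:
        payload = {
            "input": [{"content": self.user_message, "role": "user"}],
            "model_settings": {key: None for key in self._NULL_MODEL_SETTINGS},
            "tracing": 0,
            "model_name": None,
            "system_instructions": self.system_instructions,
            "tools": self.tools,
            "output_schema": None,
            "handoffs": [],
            "previous_response_id": None,
            "prompt": None,
        }
        # Compact separators so the output matches the raw string in the assertion.
        return json.dumps(payload, separators=(",", ":"))

A test could then assert against ModelInvocationPayloadBuilder(user_message="Tell me about recursion in programming.", system_instructions="You only respond in haikus.").build() instead of pasting the escaped string.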
Scaffolds a couple of initial OpenAI orchestration tests: one for a "hello world" orchestration and one for an orchestration that uses an activity as a tool. They're a bit tedious to write, as there is a lot of serialized data shuffled to the activities and back; there's plenty of room for improvement, but I'd like to start validating functionality.
Also adds the ability to swap model providers (which will be needed for testing the activity logic, which I don't do...yet).